Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes
We give a simple proof that the (approximate, decisional) Shortest Vector Problem is \NP-hard under a randomized reduction. Specifically, we show that for any $p \geq 1$ and any constant $\gamma < 2^{1/p}$, the $\gamma$-approximate problem in the $\ell_p$ norm ($\gamma$-\GapSVP_p) is not in $\mathsf{RP}$ unless \NP \subseteq \mathsf{RP}. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of $\gamma$-\GapSVP_p using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices.
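The "Construction A" step can be illustrated with a toy sketch (parameters are mine, not the paper's): the lattice consists of exactly those integer vectors that reduce, modulo q, to a Reed-Solomon codeword.

```python
from itertools import product

# Toy sketch (parameters illustrative only): "Construction A" applied to a
# Reed-Solomon code. The lattice is
#   L(C) = { x in Z^n : (x mod q) is a codeword of C },
# where C is the RS code of dimension k over F_q, i.e. evaluations of
# polynomials of degree < k at n distinct points.
q, n, k = 13, 6, 2
points = list(range(1, n + 1))      # distinct evaluation points in F_q

def rs_codeword(coeffs):
    """Evaluate the polynomial with the given coefficient list at each point."""
    return [sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q
            for a in points]

def in_code(v):
    """Membership in C, by brute force over all q^k codewords (fine at toy size)."""
    return any(rs_codeword(c) == [x % q for x in v]
               for c in product(range(q), repeat=k))

def in_lattice(x):
    """x lies in the Construction-A lattice iff x mod q is a codeword."""
    return in_code(x)

w = rs_codeword([3, 5])                     # a codeword, lifted to Z^n
assert in_lattice(w)                        # codewords are lattice points...
assert in_lattice([x + q for x in w])       # ...and so are their q*Z^n shifts
assert not in_lattice([1, 0, 0, 0, 0, 0])   # but not arbitrary integer vectors
```

The point of the construction is that codeword geometry (minimum distance, density of codewords near a point) transfers directly to lattice geometry, which is what the local-density argument exploits.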
As in all known \NP-hardness results for \GapSVP_p with $p < \infty$, our reduction uses randomness. Indeed, it is a notorious open problem to prove \NP-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Trans. Inf. Theory 2006).
As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding $n$-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an $O(\sqrt{\log n})$ factor of Minkowski's bound. This asymptotically matches the best known distance for decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf. Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms
Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) ∖ 2ℤ and constants C > 1, ε > 0, there is no 2^((1-ε)n/C)-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available.
Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p_1 ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
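The BDD promise can be made concrete with a toy example (mine, not the paper's reduction): when the target is well within half the minimum distance of a reasonably orthogonal basis, even Babai's simple round-off algorithm recovers the unique closest lattice vector.

```python
import numpy as np

# Illustration only: the BDD_{p,alpha} promise in a toy 2D lattice.
# Given basis B and target t with dist(t, L(B)) <= alpha * lambda_1(B),
# find the closest lattice vector (unique when alpha < 1/2).
B = np.array([[7.0, 1.0],
              [2.0, 8.0]])          # rows are basis vectors; near-orthogonal

def babai_round(B, t):
    """Babai's round-off: write t ~ coeffs @ B over the reals, round coeffs."""
    coeffs = np.linalg.solve(B.T, t)
    return np.rint(coeffs) @ B

v = np.array([3.0, -1.0]) @ B       # hidden lattice vector: 3*b1 - b2
e = np.array([0.9, -0.7])           # small error, well within alpha*lambda_1
t = v + e
assert np.allclose(babai_round(B, t), v)   # round-off recovers v exactly
```

Round-off succeeds here only because the basis is nearly orthogonal and the error is far below the unique-decoding radius; the hardness results above concern the regime where no such structure is available.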
How (Not) to Instantiate Ring-LWE
The \emph{learning with errors over rings} (Ring-LWE) problem---or
more accurately, family of problems---has emerged as a promising
foundation for cryptography due to its practical efficiency,
conjectured quantum resistance, and provable \emph{worst-case
hardness}: breaking certain instantiations of Ring-LWE is at least
as hard as quantumly approximating the Shortest Vector Problem on
\emph{any} ideal lattice in the ring.
Despite this hardness guarantee, several recent works have shown that
certain instantiations of Ring-LWE can be broken by relatively simple
attacks. While the affected instantiations are not supported by
worst-case hardness theorems (and were not ever proposed for
cryptographic purposes), this state of affairs raises natural
questions about what other instantiations might be vulnerable, and in
particular whether certain classes of rings are inherently unsafe for
Ring-LWE.
This work comprehensively reviews the known attacks on Ring-LWE and
vulnerable instantiations. We give a new, unified exposition which
reveals an elementary geometric reason why the attacks work, and
provide rigorous analysis to explain certain phenomena that were
previously only exhibited by experiments. In all cases, the
insecurity of an instantiation is due to the fact that the error
distribution is insufficiently ``well spread'' relative to the ring.
In particular, the insecure instantiations use the so-called
\emph{non-dual} form of Ring-LWE, together with \emph{spherical} error
distributions that are much narrower and of a very different shape
than the ones supported by hardness proofs.
On the positive side, we show that any Ring-LWE instantiation which
satisfies (or only almost satisfies) the hypotheses of the
``worst-case hardness of search'' theorem is \emph{provably immune} to
broad generalizations of the above-described attacks: the running time
divided by advantage is at least exponential in the degree of the
ring. This holds for the ring of integers in \emph{any} number field,
so the rings themselves are not the source of insecurity in the
vulnerable instantiations. Moreover, the hypotheses of the worst-case
hardness theorem are \emph{nearly minimal} ones which provide these
immunity guarantees.
Improved Hardness of BDD and SVP Under Gap-(S)ETH
We show improved fine-grained hardness of two key lattice problems in the $\ell_p$ norm: Bounded Distance Decoding to within an $\alpha$ factor of the minimum distance ($\mathrm{BDD}_{p,\alpha}$) and the (decisional) $\gamma$-approximate Shortest Vector Problem ($\mathrm{SVP}_{p,\gamma}$), assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH). Specifically, we show:
1. For all $p \in [1, \infty)$, there is no $2^{o(n)}$-time algorithm for $\mathrm{BDD}_{p,\alpha}$ for any constant $\alpha > \alpha_{\mathrm{kn}}$, where $\alpha_{\mathrm{kn}} = 2^{-c_{\mathrm{kn}}}$ and $c_{\mathrm{kn}}$ is the kissing-number constant, unless non-uniform Gap-ETH is false.
2. For all $p \in [1, \infty)$, there is no $2^{o(n)}$-time algorithm for $\mathrm{BDD}_{p,\alpha}$ for any constant $\alpha > \alpha_p^\ddagger$, where $\alpha_p^\ddagger$ is explicit and satisfies $\alpha_p^\ddagger = 1$ for $p \leq 2$, $\alpha_p^\ddagger < 1$ for $p > 2$, and $\alpha_p^\ddagger \to 1/2$ as $p \to \infty$, unless randomized Gap-ETH is false.
3. For all $p \in [1, \infty) \setminus 2\mathbb{Z}$ and all $C > 1$, there is no $2^{n/C}$-time algorithm for $\mathrm{BDD}_{p,\alpha}$ for any constant $\alpha > \alpha_{p,C}^\dagger$, where $\alpha_{p,C}^\dagger$ is explicit and satisfies $\alpha_{p,C}^\dagger \to 1$ as $C \to \infty$ for any fixed $p$, unless non-uniform Gap-SETH is false.
4. For all $p > p_0 \approx 2.1397$, $p \notin 2\mathbb{Z}$, and all $C > C_p$, there is no $2^{n/C}$-time algorithm for $\mathrm{SVP}_{p,\gamma}$ for some constant $\gamma > 1$, where $C_p$ is explicit and satisfies $C_p \to 1$ as $p \to \infty$, unless randomized Gap-SETH is false. Comment: ITCS 2022.
Outsourcing Computation: the Minimal Refereed Mechanism
We consider a setting where a verifier with limited computation power delegates a resource-intensive computation task---which requires a $T \times S$ computation tableau---to two provers, where the provers are rational in that each prover maximizes their own payoff---taking into account losses incurred by the cost of computation. We design a mechanism called the Minimal Refereed Mechanism (MRM) such that if the verifier has $O(\log S + \log T)$ time and space computation power, then both provers will provide an honest result without the verifier putting any effort into verifying the results. The amount of computation required of the provers (and thus the cost) is a multiplicative $\log S$-factor more than the computation itself, making this scheme efficient especially for low-space computations. Comment: 17 pages, 1 figure; WINE 2019: The 15th Conference on Web and Internet Economics.
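The classic refereed-computation idea underlying such mechanisms can be sketched in a few lines (this is an illustrative toy, not the paper's protocol): if two provers' computation transcripts disagree, the verifier binary-searches for the first step where they diverge and re-executes only that single step.

```python
# Toy sketch of refereed computation (illustrative; not the MRM itself):
# two provers publish state sequences for an iterated computation; the
# verifier finds the first divergence via binary search and checks one step.

def step(state):
    """One step of a toy iterated computation (an affine map mod 1000)."""
    return (3 * state + 1) % 1000

def run(state, steps):
    trace = [state]
    for _ in range(steps):
        state = step(state)
        trace.append(state)
    return trace

def referee(trace_a, trace_b):
    """Decide which prover is honest, reading only O(log T) positions
    and re-executing a single step."""
    if trace_a == trace_b:
        return "agree"
    lo, hi = 0, len(trace_a) - 1      # traces agree at lo (same input), differ at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    truth = step(trace_a[lo])          # re-execute the first disputed step
    return "A" if trace_a[hi] == truth else "B"

T = 64
honest = run(7, T)
cheat = list(honest)
cheat[40] ^= 1                         # prover B lies at step 40...
for i in range(41, T + 1):             # ...and recomputes forward consistently
    cheat[i] = step(cheat[i - 1])
assert referee(honest, cheat) == "A"   # the referee still catches the lie
```

In a real protocol the provers would commit to their tableaus (e.g. via Merkle trees) rather than publish them in full; the binary-search structure is what keeps the verifier's work logarithmic.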
Robustness of the Learning with Errors Assumption
Starting with the work of Ishai-Sahai-Wagner and Micali-Reyzin, a new goal has been set within the theory of cryptography community: to design cryptographic primitives that are secure against large classes of side-channel attacks. Recently, many works have focused on designing various cryptographic primitives that are robust (retain security) even when the secret key is “leaky”, under various intractability assumptions. In this work we propose to take a step back and ask a more basic question: which of our cryptographic assumptions (rather than cryptographic schemes) are robust in the presence of leakage of their underlying secrets?
Our main result is that the hardness of the learning with errors (LWE) problem implies its hardness with leaky secrets. More generally, we show that the standard LWE assumption implies that LWE is secure even if the secret is taken from an arbitrary distribution with sufficient entropy, and even in the presence of hard-to-invert auxiliary inputs. We exhibit various applications of this result.
1. Under the standard LWE assumption, we construct a symmetric-key encryption scheme that is robust to secret key leakage, and more generally maintains security even if the secret key is taken from an arbitrary distribution with sufficient entropy (and even in the presence of hard-to-invert auxiliary inputs).
2. Under the standard LWE assumption, we construct a (weak) obfuscator for the class of point functions with multi-bit output. We note that in most schemes that are known to be robust to leakage, the parameters of the scheme depend on the maximum leakage the system can tolerate, and hence the efficiency degrades with the maximum anticipated leakage, even if no leakage occurs at all! In contrast, the fact that we rely on a robust assumption allows us to construct a single symmetric-key encryption scheme, with parameters that are independent of the anticipated leakage, that is robust to any leakage (as long as the secret key has sufficient entropy left over). Namely, for any k < n (where n is the size of the secret key), if the secret key has only entropy k, then the security relies on the LWE assumption with secret size roughly k.
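The setting can be sketched concretely (a toy, with parameters far too small for security): LWE samples are generated as usual, but the secret is drawn from a non-uniform, low-entropy distribution, which is exactly the regime the robustness result addresses.

```python
import random

# Toy LWE sketch (parameters illustrative only, not secure): samples
# (a, <a,s> + e mod q) where the secret s is drawn from a *non-uniform*
# distribution -- here binary, so it has ~n bits of min-entropy rather
# than n*log2(q). The robustness result says LWE hardness is preserved
# whenever the secret retains sufficient entropy.
random.seed(0)
q, n, m = 3329, 16, 24

s = [random.randrange(2) for _ in range(n)]        # leaky/low-entropy secret

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]    # uniform public vector
    e = random.choice([-2, -1, 0, 1, 2])           # small error term
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

samples = [lwe_sample() for _ in range(m)]

# Sanity check: every sample is consistent with s up to the small error.
for a, b in samples:
    diff = (b - sum(ai * si for ai, si in zip(a, s))) % q
    assert min(diff, q - diff) <= 2
```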
Challenges for Ring-LWE
As lattice cryptography becomes more widely used in practice, there is
an increasing need for further cryptanalytic effort and
higher-confidence security estimates for its underlying computational
problems. Of particular interest is a class of problems used in many
recent implementations, namely, Learning With Errors (LWE), its more
efficient ring-based variant Ring-LWE, and their ``deterministic
error'' counterparts Learning With Rounding (LWR) and Ring-LWR.
To facilitate such analysis, in this work we give a broad collection
of challenges for concrete Ring-LWE and Ring-LWR instantiations over
cyclotomic rings. The challenges cover a wide variety of
instantiations, involving two-power and non-two-power cyclotomics;
moduli of various sizes and arithmetic forms; small and large numbers
of samples; and error distributions satisfying the bounds from
worst-case hardness theorems related to ideal lattices, along with
narrower errors that still appear to yield hard instantiations. We
estimate the hardness of each challenge by giving the approximate
Hermite factor and BKZ block size needed to solve it via
lattice-reduction attacks.
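A standard estimation heuristic (not necessarily the paper's exact methodology) relates the BKZ block size β to the root-Hermite factor δ it achieves, δ(β) ≈ (β/(2πe) · (πβ)^(1/β))^(1/(2(β−1))), which can be inverted numerically to get the block size a challenge requires:

```python
import math

# Common lattice-estimation heuristic (an assumption here, not taken from
# the paper): BKZ with block size beta achieves root-Hermite factor
#   delta(beta) ~ ( beta/(2*pi*e) * (pi*beta)^(1/beta) )^(1/(2*(beta-1))).
def root_hermite(beta):
    return (beta / (2 * math.pi * math.e) * (math.pi * beta) ** (1 / beta)) \
           ** (1 / (2 * (beta - 1)))

def block_size_for(delta, lo=60, hi=2000):
    """Smallest block size whose predicted root-Hermite factor is <= delta."""
    for beta in range(lo, hi):
        if root_hermite(beta) <= delta:
            return beta
    return None

# delta(beta) shrinks as beta grows: harder challenges (smaller delta)
# demand larger, costlier block sizes.
assert root_hermite(100) > root_hermite(300)
assert block_size_for(1.005) > block_size_for(1.01)
```

For serious estimates one would use a dedicated tool rather than this closed-form heuristic, but it conveys how a Hermite-factor target translates into a BKZ cost.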
A central issue in the creation of challenges for LWE-like problems is
that dishonestly generated instances can be much harder to solve than
properly generated ones, or even impossible. To address this, we
devise and implement a simple, non-interactive, publicly verifiable
protocol which gives reasonably convincing evidence that the
challenges are properly distributed, or at least not much harder than
claimed.
On Adaptively Secure Multiparty Computation with a Short CRS
In the setting of multiparty computation, a set of mutually distrusting parties wish to securely compute a joint function of their private inputs. A protocol is adaptively secure if honest parties might get corrupted \emph{after} the protocol has started. Recently (TCC 2015), three constant-round adaptively secure
protocols were presented [CGP15, DKR15, GP15]. All three constructions assume that the parties have access to a \emph{common reference string} (CRS) whose size depends on the function to compute, even when facing semi-honest adversaries. It is unknown whether constant-round adaptively secure protocols exist, without assuming access to such a CRS.
In this work, we study adaptively secure protocols which rely only on a short CRS that is independent of the function to compute.
First, we raise a subtle issue relating to the usage of \emph{non-interactive non-committing encryption} within security proofs in the UC framework, and explain how to overcome it. We demonstrate the problem in the security proof of the adaptively secure oblivious-transfer protocol from [CLOS02] and provide a complete proof of this protocol.
Next, we consider the two-party setting where one of the parties has a polynomial-size input domain, yet the other has no constraints on its input. We show that assuming the existence of adaptively secure oblivious transfer, every deterministic functionality can be computed with adaptive security in a constant number of rounds.
Finally, we present a new primitive called \emph{non-committing indistinguishability obfuscation}, and show that this primitive is \emph{complete} for constructing adaptively secure protocols with round complexity independent of the function.
Privately Constraining and Programming PRFs, the LWE Way
*Constrained* pseudorandom functions allow for delegating
``constrained'' secret keys that let one compute the function at
certain authorized inputs---as specified by a constraining
predicate---while keeping the function value at unauthorized inputs
pseudorandom. In the *constraint-hiding* variant, the
constrained key hides the predicate. On top of this,
*programmable* variants allow the delegator to explicitly set the
output values yielded by the delegated key for a particular set of
unauthorized inputs.
Recent years have seen rapid progress on applications and
constructions of these objects for progressively richer constraint
classes, resulting most recently in constraint-hiding constrained PRFs
for arbitrary polynomial-time constraints from Learning With
Errors~(LWE) [Brakerski, Tsabary, Vaikuntanathan, and Wee, TCC'17],
and privately programmable PRFs from indistinguishability obfuscation
(iO) [Boneh, Lewi, and Wu, PKC'17].
In this work we give a unified approach for constructing both of the
above kinds of PRFs from LWE with subexponential
approximation factors. Our constructions
follow straightforwardly from a new notion we call a
*shift-hiding shiftable function*, which allows for deriving a
key for the sum of the original function and any desired hidden
shift function. In particular, we obtain the first privately
programmable PRFs from non-iO assumptions.